-
Microaggressions are subtle, offensive comments that are directed at minority group members and are characteristically ambiguous in meaning. In two studies, we explored how observers interpreted such ambiguous statements by comparing microaggressions to faux pas, offenses caused by the speaker having an incidental false belief. In Experiment 1, we compared third-party observers’ blame and intentionality judgments of microaggressions with those for social faux pas. Despite judging neither microaggressions nor social faux pas to be definitively intentional, participants judged microaggressions as more blameworthy. In Experiment 2, microaggressions without explicit mental state information elicited a similar profile of judgments to those accompanied by explicit prejudiced or ignorant beliefs. Although they were, like faux pas, judged not to cause harm intentionally, microaggressive comments appeared to be judged more blameworthy on account of enduring prejudice thought to be lurking behind a speaker's false beliefs. Our current research demonstrates a distinctive profile of moral judgment for microaggressions.
-
Due to their unique persuasive power, language-capable robots must be able to both act in line with human moral norms and clearly and appropriately communicate those norms. These requirements are complicated by the possibility that humans may ascribe blame differently to humans and robots. In this work, we explore how robots should communicate in moral advising scenarios, in which the norms they are expected to follow (in a moral dilemma scenario) may be different from those their advisees are expected to follow. Our results suggest that, in fact, both humans and robots are judged more positively when they provide the advice that favors the common good over an individual’s life. These results raise critical new questions regarding people’s moral responses to robots and the design of autonomous moral agents.
-
Human-robot trust is crucial to successful human-robot interaction. We conducted a study with 798 participants distributed across 32 conditions using four dimensions of human-robot trust (reliable, capable, ethical, sincere) identified by the Multi-Dimensional-Measure of Trust (MDMT). We tested whether these dimensions can differentially capture gains and losses in human-robot trust across robot roles and contexts. Using a 4 scenario × 4 trust dimension × 2 change direction between-subjects design, we found the behavior change manipulation effective for each of the four subscales. However, the pattern of results best supported a two-dimensional conception of trust, with reliable-capable and ethical-sincere as the major constituents.
-
Human behavior is frequently guided by social and moral norms, and no human community can exist without norms. Robots that enter human societies must therefore behave in norm-conforming ways as well. However, currently there is no solid cognitive or computational model available of how human norms are represented, activated, and learned. We provide a conceptual and psychological analysis of key properties of human norms and identify the demands these properties put on any artificial agent that incorporates norms—demands on the format of norm representations, their structured organization, and their learning algorithms.